    Visual adaptation alters the apparent speed of real-world actions

    The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study, a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order to appear natural; conversely, after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video (‘re-normalisation’). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller, shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.
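    The direction of the reported re-normalisation effect can be illustrated with a toy calculation (a minimal sketch with assumed parameters such as the adaptation strength and adapting speeds; it is not the authors' model): the playback speed that appears natural is pulled part of the way toward the adapting speed, ending up below 1.0x after slow-motion adaptation and above 1.0x after fast-forward adaptation.

```python
# Toy illustration of 're-normalisation' of perceived locomotion speed.
# All parameters (adaptation strength, adapting speeds) are hypothetical,
# chosen only to show the direction of the reported effect.

def natural_point_after_adaptation(adapt_speed, adapt_strength=0.2):
    """Playback speed (x real time) that appears 'natural' after adaptation.

    The internal norm for natural speed (1.0x) is assumed to shift part of
    the way toward the adapting speed, so the playback speed needed to look
    natural moves in the same direction as the adaptor.
    """
    baseline_norm = 1.0  # before adaptation, standard playback looks natural
    shifted_norm = baseline_norm + adapt_strength * (adapt_speed - baseline_norm)
    return shifted_norm

for adaptor in (0.5, 1.0, 2.0):  # slow-motion, veridical, fast-forward adaptors
    print(f"adapting speed {adaptor:.1f}x -> "
          f"'natural' playback ~ {natural_point_after_adaptation(adaptor):.2f}x")
```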

    Hybrid video quality prediction: reviewing video quality measurement for widening application scope

    A tremendous number of objective video quality measurement algorithms have been developed during the last two decades. Most of them either measure a very limited aspect of perceived video quality or cover broad ranges of quality with limited prediction accuracy. This paper lists several perceptual artifacts that may be computationally measured by an isolated algorithm, together with some of the modeling approaches that have been proposed to predict the resulting quality from those algorithms. These algorithms usually have a very limited application scope but have been verified carefully. The paper continues with a review of some standardized and well-known video quality measurement algorithms that are meant for a wide range of applications and thus have a larger scope. Their prediction accuracy for individual artifacts is usually lower, but some of them were validated to perform sufficiently well for standardization. Several difficulties and shortcomings in developing a general-purpose model with high prediction performance are identified, such as the need for a common objective quality scale and the behavior of individual indicators when confronted with stimuli that are outside their prediction scope. The paper concludes with a systematic framework approach to tackle the development of a hybrid video quality measurement in a joint research collaboration. Funding: Polish National Centre for Research and Development (NCRD) SP/I/1/77065/10; Swedish Governmental Agency for Innovation Systems (Vinnova).
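    The hybrid strategy discussed in the paper, pooling several isolated artifact indicators into one overall quality prediction, can be sketched as a simple weighted pooling of per-artifact scores. The indicator names, weights, and the 1-5 MOS output scale below are assumptions made for illustration, not part of any standardized metric.

```python
# Minimal sketch of a hybrid video quality predictor: several isolated
# artifact indicators are pooled into one mean-opinion-score (MOS) estimate.
# Indicator names, weights, and the 1..5 output scale are assumptions;
# real hybrid models are trained and validated against subjective data.

ARTIFACT_WEIGHTS = {
    "blockiness": 0.8,  # visibility of compression block edges
    "blur": 1.0,        # loss of spatial detail
    "jerkiness": 0.6,   # temporal stutter / frame drops
}

def predict_mos(artifact_scores):
    """Map per-artifact scores (each in 0..1, higher = more visible) to 1..5 MOS."""
    penalty = sum(w * artifact_scores.get(name, 0.0)
                  for name, w in ARTIFACT_WEIGHTS.items())
    max_penalty = sum(ARTIFACT_WEIGHTS.values())
    mos = 5.0 - 4.0 * penalty / max_penalty  # 5 = excellent, 1 = bad
    return max(1.0, min(5.0, mos))

print(predict_mos({"blockiness": 0.3, "blur": 0.1, "jerkiness": 0.0}))
```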

    Temporal Integration of Movement: The Time-Course of Motion Streaks Revealed by Masking

    Temporal integration in the visual system causes fast-moving objects to leave oriented ‘motion streaks’ in their wake, which could be used to facilitate motion direction perception. Temporal integration is thought to occur over 100 ms in early cortex, although this has never been tested for motion streaks. Here we compare the ability of fast-moving (‘streaky’) and slow-moving fields of dots to mask briefly flashed gratings oriented either parallel or orthogonal to the motion trajectory. Gratings were presented at various asynchronies relative to motion onset (from … to … ms) to sample the time-course of the accumulating streaks. Predictions were that masking would be strongest for the fast parallel condition, and that it would be weak at early asynchronies and strengthen over time as integration rendered the translating dots more streaky and grating-like. The asynchrony at which the masking function reached a plateau would then correspond to the temporal integration period. As expected, fast-moving dots caused greater masking of parallel gratings than of orthogonal gratings, and slow motion produced only modest masking of either grating orientation. Masking strength in the fast, parallel condition increased with time and reached a plateau after 77 ms, providing an estimate of the temporal integration period for mechanisms encoding motion streaks. Interestingly, the greater masking by fast motion of parallel compared with orthogonal gratings first reached significance at 48 ms before motion onset, indicating an effect of backward masking by motion streaks.
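    The 77 ms figure corresponds to the asynchrony at which the masking function levels off. A minimal way to estimate such a plateau, assuming a saturating-exponential form and using made-up data points (this is not necessarily the authors' analysis), is sketched below.

```python
# Sketch: estimate a temporal integration period by fitting a saturating
# exponential to masking strength as a function of stimulus asynchrony.
# The data points below are fabricated for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(t, amplitude, tau):
    """Masking that grows from zero and plateaus with time constant tau (ms)."""
    return amplitude * (1.0 - np.exp(-np.maximum(t, 0.0) / tau))

asynchrony_ms = np.array([0, 20, 40, 60, 80, 120, 200], dtype=float)  # hypothetical SOAs
masking = np.array([0.02, 0.43, 0.63, 0.72, 0.76, 0.79, 0.80])        # hypothetical data

(amplitude, tau), _ = curve_fit(saturating_exp, asynchrony_ms, masking, p0=(1.0, 50.0))
# Take the asynchrony where the curve reaches 95% of its plateau as the
# integration period (one possible criterion, chosen here for simplicity).
integration_period_ms = -tau * np.log(0.05)
print(f"fitted plateau ~ {amplitude:.2f}, time constant ~ {tau:.0f} ms")
print(f"asynchrony reaching 95% of plateau ~ {integration_period_ms:.0f} ms")
```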

    Perceptual Learning in the Absence of Task or Stimulus Specificity

    Performance on most sensory tasks improves with practice. When making particularly challenging sensory judgments, perceptual improvements in performance are tightly coupled to the trained task and stimulus configuration. The form of this specificity is believed to provide a strong indication of which neurons are solving the task or encoding the learned stimulus. Here we systematically decouple task- and stimulus-mediated components of trained improvements in perceptual performance and show that neither provides an adequate description of the learning process. Twenty-four human subjects trained on a unique combination of task (three-element alignment or bisection) and stimulus configuration (vertical or horizontal orientation). Before and after training, we measured subjects' performance on all four task-configuration combinations. We demonstrate for the first time that learning does transfer across both task and configuration, provided there is a common spatial axis to the judgment. The critical factor underlying the transfer of learning effects is not the task or stimulus arrangement itself, but rather the recruitment of common sets of neurons most informative for making each perceptual judgment.
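    The 2 x 2 design (task x stimulus configuration) and the transfer question can be made concrete with a small calculation comparing pre- to post-training improvement on trained and untrained combinations. The threshold values below are hypothetical placeholders, as is the pattern of transfer they produce; they only show how learning and transfer indices would be computed.

```python
# Sketch: quantify learning and transfer in a 2 (task) x 2 (configuration) design.
# Thresholds (arbitrary units; lower = better) are hypothetical example values.

pre  = {("alignment", "vertical"): 10.0, ("alignment", "horizontal"): 10.0,
        ("bisection", "vertical"): 10.0, ("bisection", "horizontal"): 10.0}
post = {("alignment", "vertical"):  6.0, ("alignment", "horizontal"):  9.5,
        ("bisection", "vertical"):  6.5, ("bisection", "horizontal"):  9.4}

trained = ("alignment", "vertical")  # the combination practised during training

def improvement(condition):
    """Percent threshold reduction from pre- to post-training."""
    return 100.0 * (pre[condition] - post[condition]) / pre[condition]

for condition in pre:
    tag = "trained" if condition == trained else "untrained"
    print(f"{condition} ({tag}): {improvement(condition):.0f}% improvement")

# Transfer index: improvement on each untrained condition relative to the
# trained one (1.0 = full transfer, 0.0 = complete specificity).
for condition in pre:
    if condition != trained:
        print(f"transfer index {condition}: "
              f"{improvement(condition) / improvement(trained):.2f}")
```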

    Common Cortical Loci Are Activated during Visuospatial Interpolation and Orientation Discrimination Judgements

    There is a wealth of literature on the role of short-range interactions between low-level orientation-tuned filters in the perception of discontinuous contours. However, little is known about how spatial information is integrated across more distant regions of the visual field in the absence of explicit local orientation cues, a process referred to here as visuospatial interpolation (VSI). To examine the neural correlates of VSI, high-field functional magnetic resonance imaging was used to study brain activity while observers either judged the alignment of three Gabor patches by a process of interpolation or discriminated the local orientation of the individual patches. Relative to a fixation baseline, the two tasks activated a largely overlapping network of regions within the occipito-temporal, occipito-parietal and frontal cortices. Activated clusters specific to the orientation task (orientation > interpolation) included the caudal intraparietal sulcus, an area whose role in orientation encoding per se has been hotly disputed. Surprisingly, there were few task-specific activations associated with visuospatial interpolation (VSI > orientation), suggesting that largely common cortical loci were activated by the two experimental tasks. These data are consistent with previous studies suggesting that higher-level grouping processes (putatively involved in VSI) are automatically engaged when the spatial properties of a stimulus (e.g. size, orientation or relative position) are used to make a judgement.
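    The contrast logic described here (each task versus a fixation baseline, plus task-specific contrasts such as orientation > interpolation) can be illustrated with a toy conjunction-style comparison on simulated activation maps. This is not the imaging pipeline used in the study; the threshold and simulated effect sizes are arbitrary assumptions.

```python
# Toy conjunction analysis: find voxels activated by both tasks relative to a
# fixation baseline, and voxels specific to one task. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000
threshold = 2.0  # assumed z-score threshold for 'active vs. baseline'

# Simulated task-vs-baseline z maps with a shared activated region
# (voxels 0-99) and a small orientation-specific region (voxels 100-119).
shared = np.zeros(n_voxels); shared[:100] = 3.0
orientation_only = np.zeros(n_voxels); orientation_only[100:120] = 3.0

z_interpolation = shared + rng.normal(0.0, 1.0, n_voxels)
z_orientation = shared + orientation_only + rng.normal(0.0, 1.0, n_voxels)

active_interp = z_interpolation > threshold
active_orient = z_orientation > threshold

print("voxels active in both tasks:", int((active_interp & active_orient).sum()))
print("orientation-specific voxels:", int((active_orient & ~active_interp).sum()))
print("interpolation-specific voxels:", int((active_interp & ~active_orient).sum()))
```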